Answering complex questions often requires multi-step reasoning to arrive at a final answer. Most research on decomposing complex questions has focused on open-domain systems, where decompositions have been shown to improve retrieval. In the machine reading setting, however, little work has examined when decompositions are actually helpful. We conduct experiments on decompositions in machine reading across a range of models and datasets to unify recent work in this space. We find that decompositions can be helpful in the few-shot case, giving several points of improvement in exact match scores. However, we also show that when models have access to a few hundred or more training examples, decompositions are no longer helpful (and can actually be detrimental). Our analysis thus implies that models can learn decompositions implicitly even with limited data.
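The abstract above does not show what a decomposition looks like in the machine reading setting; the short Python sketch below illustrates the general idea with a hypothetical few-shot prompt that splits a complex question into sub-questions over a passage. The prompt wording and format are illustrative assumptions, not the paper's actual setup.

```python
# Hypothetical illustration of question decomposition for machine reading:
# a complex question over a passage is split into sub-questions whose answers
# feed the final answer. Prompt wording and format are assumptions.

def build_decomposition_prompt(passage: str, question: str) -> str:
    """Assemble a few-shot style prompt that asks for sub-questions first."""
    example = (
        "Passage: Marie Curie won the Nobel Prize in Physics in 1903 and "
        "the Nobel Prize in Chemistry in 1911.\n"
        "Question: How many years after her first Nobel Prize did Marie "
        "Curie win her second?\n"
        "Sub-question 1: When did Marie Curie win her first Nobel Prize? -> 1903\n"
        "Sub-question 2: When did she win her second? -> 1911\n"
        "Final answer: 8\n"
    )
    return f"{example}\nPassage: {passage}\nQuestion: {question}\nSub-question 1:"


if __name__ == "__main__":
    print(build_decomposition_prompt(
        "The Nile flows through eleven countries before reaching the sea.",
        "How many countries does the Nile flow through?",
    ))
```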
Recent work in open-domain question answering (ODQA) has shown that adversarial poisoning of the input contexts can cause large drops in accuracy for production systems. However, little to no work has proposed methods to defend against these attacks. To do so, we introduce a new method that uses query augmentation to search for a diverse set of retrieved passages that could answer the original question. We integrate these new passages into the model through the design of a novel confidence method that compares the predicted answer to its appearance in the retrieved contexts (what we call Confidence from Answer Redundancy, i.e. CAR). Together, these methods provide a simple but effective defense against poisoning attacks, yielding gains of 5-20% exact match across varying levels of data poisoning.
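The abstract does not specify how CAR is computed; the sketch below implements one plausible reading of "answer redundancy", scoring a predicted answer by the fraction of retrieved passages that contain it. Function names and the normalization are assumptions.

```python
# Hypothetical sketch of an answer-redundancy confidence score in the spirit
# of CAR: a predicted answer is trusted more when it appears in many of the
# independently retrieved passages. The exact scoring rule is an assumption.
import re


def normalize(text: str) -> str:
    """Lowercase and strip punctuation for a loose string match."""
    return re.sub(r"[^a-z0-9 ]", "", text.lower()).strip()


def answer_redundancy_confidence(predicted_answer: str, passages: list[str]) -> float:
    """Fraction of retrieved passages that contain the predicted answer span."""
    if not passages:
        return 0.0
    answer = normalize(predicted_answer)
    hits = sum(1 for p in passages if answer in normalize(p))
    return hits / len(passages)


# Example: an answer supported by 3 of 4 passages gets confidence 0.75;
# a poisoned answer appearing in only 1 of 4 passages gets 0.25.
```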
Post-hoc explanation methods are used with the intent of providing insights about neural networks and are sometimes said to help engender trust in their outputs. However, popular explanation methods have been found to be fragile to minor perturbations of input features or model parameters. Relying on constraint relaxation techniques from non-convex optimization, we develop a method that upper-bounds the largest change an adversary can make to a gradient-based explanation via bounded manipulation of either the input features or model parameters. By propagating a compact input or parameter set as symbolic intervals through the forward and backward computations of the neural network, we can formally certify the robustness of gradient-based explanations. Our bounds are differentiable, so we can incorporate provable explanation robustness into neural network training. Empirically, our method surpasses the robustness provided by previous heuristic approaches. We find that our training method is the only one able to learn neural networks with certificates of explanation robustness across all six datasets tested.
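To make the interval-propagation idea concrete, here is a minimal numpy sketch that bounds the input gradient of a tiny two-layer ReLU network over an input box by propagating intervals through the forward and backward pass. It illustrates the general mechanism only; the paper's constraint-relaxation bounds, parameter-space certificates, and training procedure are not reproduced.

```python
# Minimal numpy sketch of bounding an input-gradient explanation with interval
# propagation, for a tiny network f(x) = w2 . relu(W1 x + b1). This is an
# illustration of the general idea, not the paper's algorithm or code.
import numpy as np


def interval_matvec(W, lo, hi):
    """Sound interval bounds for W @ x when x lies elementwise in [lo, hi]."""
    center, radius = (lo + hi) / 2.0, (hi - lo) / 2.0
    mid = W @ center
    rad = np.abs(W) @ radius
    return mid - rad, mid + rad


def gradient_bounds(W1, b1, w2, x, eps):
    """Bound d f / d x over the input box [x - eps, x + eps]."""
    # Forward pass in interval arithmetic: bounds on the pre-activation z.
    z_lo, z_hi = interval_matvec(W1, x - eps, x + eps)
    z_lo, z_hi = z_lo + b1, z_hi + b1
    # ReLU derivative is 0 or 1; its interval depends on the sign of z's bounds.
    d_lo = (z_lo > 0).astype(float)          # surely active -> lower bound 1
    d_hi = (z_hi > 0).astype(float)          # possibly active -> upper bound 1
    # Backward pass: gradient = W1^T (w2 * relu'(z)), propagated as intervals.
    g_lo_hidden = np.minimum(w2 * d_lo, w2 * d_hi)
    g_hi_hidden = np.maximum(w2 * d_lo, w2 * d_hi)
    return interval_matvec(W1.T, g_lo_hidden, g_hi_hidden)


rng = np.random.default_rng(0)
W1, b1, w2 = rng.normal(size=(8, 4)), rng.normal(size=8), rng.normal(size=8)
lo, hi = gradient_bounds(W1, b1, w2, x=rng.normal(size=4), eps=0.05)
print("certified gradient interval width:", (hi - lo).max())
```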
Neural network interpretation methods, particularly feature attribution methods, are known to be fragile with respect to adversarial input perturbations. To address this, several methods that enhance the local smoothness of the gradient during training have been proposed for attaining \textit{robust} feature attributions. However, prior work has not taken the normalization of attributions into account, even though normalization is essential for their visualization, and this has been an obstacle to understanding and improving the robustness of feature attribution methods. In this paper, we provide new insights by taking such normalization into account. First, we show that for every non-negative homogeneous neural network, a naive $\ell_2$-robust criterion for gradients is \textit{not} normalization invariant, which means that two functions with the same normalized gradient can receive different criterion values. Second, we formulate a normalization-invariant cosine distance-based criterion and derive its upper bound, which gives insight into why simply minimizing the Hessian norm at the input, as has been done in previous work, is not sufficient for attaining robust feature attribution. Finally, we propose combining both the $\ell_2$ and the cosine distance-based criteria as regularization terms to leverage the advantages of both in aligning the local gradient. As a result, we experimentally show that models trained with our method produce much more robust interpretations on CIFAR-10 and ImageNet-100 than recent baselines, without significantly hurting accuracy. To the best of our knowledge, this is the first work to verify the robustness of interpretation on a larger-scale dataset beyond CIFAR-10, thanks to the computational efficiency of our method.
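A minimal PyTorch sketch of the kind of combined regularizer described above: it penalizes both the $\ell_2$ change and the angular (cosine) change of the input gradient under a small perturbation. The perturbation scheme, weighting, and function names are assumptions rather than the paper's implementation.

```python
# Hypothetical PyTorch sketch of combining an l2 and a cosine-distance penalty
# on input gradients, in the spirit of the combined criterion described above.
# The perturbation scheme, weighting, and names are assumptions.
import torch
import torch.nn.functional as F


def input_gradient(model, x, y):
    """Gradient of the loss with respect to the input, kept differentiable."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    (grad,) = torch.autograd.grad(loss, x, create_graph=True)
    return grad


def attribution_robustness_penalty(model, x, y, eps=0.01, lam_l2=1.0, lam_cos=1.0):
    """Penalize both the l2 change and the angular change of the gradient."""
    g_clean = input_gradient(model, x, y)
    g_pert = input_gradient(model, x + eps * torch.randn_like(x), y)
    flat_c, flat_p = g_clean.flatten(1), g_pert.flatten(1)
    l2_term = (flat_c - flat_p).norm(dim=1).mean()
    cos_term = (1.0 - F.cosine_similarity(flat_c, flat_p, dim=1)).mean()
    return lam_l2 * l2_term + lam_cos * cos_term


# Training use: total_loss = task_loss + attribution_robustness_penalty(model, x, y)
```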
The growing popularity of social media has raised concerns about children's safety online. Interactions between minors and adults with predatory intent are an especially serious concern. Research on online sexual grooming has typically relied on domain experts to manually annotate conversations, limiting both scale and scope. In this work, we test how well automated methods can detect conversational behaviors and replace expert human annotation. Guided by a psychological theory of online grooming, we label $6772$ chat messages sent by child sex offenders with one of eleven predatory behaviors. We train bag-of-words and natural language inference models to classify each behavior, and show that the best-performing models classify the behaviors in a manner that is consistent, but not in agreement, with human annotation.
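As a rough illustration of the bag-of-words setup, the sketch below builds a generic per-behavior text classifier with scikit-learn. The data, label names, and model choice are placeholders, not the paper's pipeline.

```python
# Generic sklearn sketch of a bag-of-words behavior classifier: one binary
# classifier per behavior over chat messages. Data and model choice are
# placeholders for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-in data: (message, has_behavior) pairs for a single behavior.
messages = ["example message one", "example message two", "another example"]
labels = [1, 0, 1]

bow_classifier = make_pipeline(
    CountVectorizer(ngram_range=(1, 2), min_df=1),  # unigram/bigram bag-of-words
    LogisticRegression(max_iter=1000),
)
bow_classifier.fit(messages, labels)
print(bow_classifier.predict(["another message to score"]))
```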
Deploying AI-powered systems requires trustworthy models that support effective human interaction, going beyond raw prediction accuracy. Concept bottleneck models promote trustworthiness by conditioning classification tasks on an intermediate level of human-like concepts. This enables human interventions that correct mispredicted concepts to improve the model's performance. However, existing concept bottleneck models are unable to find an optimal compromise between high task accuracy, robust concept-based explanations, and effective interventions on concepts, especially in real-world conditions where complete and accurate concept supervision is scarce. To address this, we propose Concept Embedding Models, a new family of concept bottleneck models that goes beyond the current accuracy-vs-interpretability trade-off by learning interpretable, high-dimensional concept representations. Our experiments show that Concept Embedding Models (1) attain better or competitive task accuracy w.r.t. standard neural models without concepts, (2) provide concept representations that capture meaningful semantics, including their ground-truth labels, (3) support test-time concept interventions whose effect on test accuracy surpasses that of standard concept bottleneck models, and (4) scale to real-world conditions with scarce complete concept supervision.
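The following PyTorch sketch shows roughly what a concept-embedding bottleneck of this kind could look like: each concept gets a high-dimensional embedding and a predicted probability, and a test-time intervention overwrites that probability with the ground truth. The dimensions, scoring function, and mixing rule are assumptions made for illustration.

```python
# Rough PyTorch sketch of a concept-embedding bottleneck in the spirit of the
# description above. Architecture details are assumptions, not the paper's code.
import torch
import torch.nn as nn


class ConceptEmbeddingBottleneck(nn.Module):
    def __init__(self, in_dim: int, n_concepts: int, emb_dim: int = 16):
        super().__init__()
        # Two candidate embeddings per concept: one "active", one "inactive".
        self.active = nn.Linear(in_dim, n_concepts * emb_dim)
        self.inactive = nn.Linear(in_dim, n_concepts * emb_dim)
        self.score = nn.Linear(2 * emb_dim, 1)  # per-concept probability
        self.n_concepts, self.emb_dim = n_concepts, emb_dim

    def forward(self, h, interventions=None):
        B = h.shape[0]
        c_pos = self.active(h).view(B, self.n_concepts, self.emb_dim)
        c_neg = self.inactive(h).view(B, self.n_concepts, self.emb_dim)
        p = torch.sigmoid(self.score(torch.cat([c_pos, c_neg], dim=-1))).squeeze(-1)
        if interventions is not None:           # overwrite mispredicted concepts
            p = torch.where(torch.isnan(interventions), p, interventions)
        mixed = p.unsqueeze(-1) * c_pos + (1 - p).unsqueeze(-1) * c_neg
        return mixed.flatten(1), p              # embeddings feed the task head


# Usage: bottleneck = ConceptEmbeddingBottleneck(in_dim=128, n_concepts=10)
# task_logits = task_head(bottleneck(features)[0])
```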
This paper considers the problem of unsupervised 3D object reconstruction from in-the-wild single-view images. Due to ambiguity and intrinsic ill-posedness, the problem is inherently difficult to solve and therefore requires strong regularization to achieve disentanglement of the different latent factors. Unlike existing works that introduce explicit regularization into the objective function, we look into a different space for implicit regularization: the structure of the latent space. Specifically, we restrict the structure of the latent space to capture a topological causal ordering of the latent factors (i.e., representing causal dependencies as a directed acyclic graph). We first show that different causal orderings matter for 3D reconstruction, and then explore several approaches to finding a task-dependent causal factor ordering. Our experiments demonstrate that the latent space structure indeed serves as an implicit regularization and introduces an inductive bias beneficial for reconstruction.
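The abstract does not detail how the ordering is imposed; one simple way to encode a fixed topological (causal) ordering among latent factors is an autoregressive map with a strictly lower-triangular dependency mask, sketched below. This is an illustrative assumption, not the paper's model.

```python
# Hypothetical sketch of constraining latent factors to follow a fixed
# topological (causal) ordering: factor i may depend only on factors earlier in
# the ordering, enforced with a strictly lower-triangular mask. This illustrates
# the idea of a DAG-ordered latent space, not the paper's architecture.
import torch
import torch.nn as nn


class OrderedLatents(nn.Module):
    def __init__(self, n_factors: int, ordering: list[int]):
        super().__init__()
        self.register_buffer("perm", torch.tensor(ordering))
        self.weight = nn.Parameter(torch.zeros(n_factors, n_factors))
        # Strictly lower-triangular mask in the given ordering => acyclic graph.
        self.register_buffer("mask", torch.tril(torch.ones(n_factors, n_factors), -1))

    def forward(self, z_independent):
        z = z_independent[:, self.perm]             # reorder factors causally
        z = z + z @ (self.weight * self.mask).T     # each factor sees only its parents
        inverse = torch.argsort(self.perm)
        return z[:, inverse]                        # restore the original layout


# Usage: layer = OrderedLatents(n_factors=6, ordering=[2, 0, 1, 5, 3, 4])
```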
Knowledge graphs have emerged as an effective tool for managing and standardizing semi-structured domain knowledge in a human- and machine-interpretable way. With regard to graph-based domain applications, such as embeddings and graph neural networks, current research increasingly takes into account the time-related evolution of the information encoded in a graph. Algorithms and models for stationary and static knowledge graphs are extended to make them suitable for time-aware domains, where time awareness can be interpreted in different ways. In particular, a distinction is made between the validity period and the traceability of facts as objectives of time-related knowledge graph extensions. In this context, terms and definitions such as dynamic and temporal are often used inconsistently or interchangeably in the literature. Therefore, with this paper, we aim to provide a short but well-defined overview of time-aware knowledge graph extensions and thus facilitate future research in this field.
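A minimal example (not from the paper) of the two time dimensions distinguished above, attached to a single knowledge graph fact:

```python
# Minimal illustration of the distinction drawn above: the validity period of a
# fact versus when the fact was recorded, i.e. its traceability in the graph's
# history. The representation is a generic example, not the paper's formalism.
from dataclasses import dataclass
from datetime import date
from typing import Optional


@dataclass
class TemporalFact:
    subject: str
    predicate: str
    obj: str
    valid_from: Optional[date] = None      # validity period of the fact itself
    valid_to: Optional[date] = None
    recorded_at: Optional[date] = None     # when the fact entered the graph


fact = TemporalFact(
    "Berlin", "capitalOf", "Germany",
    valid_from=date(1990, 10, 3),           # the fact holds from reunification
    recorded_at=date(2024, 1, 15),          # but was added to this graph later
)
```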
Labels are crucial for training machine learning (ML) models. Typically, datasets for ML classification tasks contain hard labels, yet learning with soft labels has been shown to yield benefits for model generalization, robustness, and calibration. Earlier work found success in forming soft labels from multiple annotators' hard labels; however, this approach may not converge to the best labels and requires many annotators, which can be expensive and inefficient. We focus on efficiently eliciting soft labels from individual annotators. We collect and release a dataset of soft labels for CIFAR-10 via a crowdsourcing study ($n=242$). We demonstrate that learning with our labels achieves comparable model performance to prior approaches while requiring far fewer annotators. Our elicitation methodology therefore shows promise for enabling practitioners to enjoy the benefits of improved model performance and reliability with fewer annotations, and provides guidance for future dataset curators on the benefits of leveraging richer information, such as categorical uncertainty, from individual annotators.
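A minimal PyTorch sketch of learning from soft labels, where the target is a per-example probability distribution instead of a single class index; the elicitation interface and dataset format are assumptions.

```python
# Minimal PyTorch sketch of training with soft labels: the target is a full
# probability distribution per example rather than a hard class index.
import torch
import torch.nn.functional as F


def soft_label_loss(logits, soft_targets):
    """Cross-entropy against annotator-provided probability distributions."""
    log_probs = F.log_softmax(logits, dim=-1)
    return -(soft_targets * log_probs).sum(dim=-1).mean()


# Example: one annotator is 70% sure an image is a cat, 20% dog, 10% fox.
logits = torch.randn(1, 3, requires_grad=True)
soft_targets = torch.tensor([[0.7, 0.2, 0.1]])
loss = soft_label_loss(logits, soft_targets)
loss.backward()
```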
A major challenge in studying the robustness of deep learning is defining the set of ``meaningless'' perturbations to which a given neural network (NN) should be invariant. Most work on robustness implicitly uses a human as the reference model to define such perturbations. Our work offers a new view on robustness by using another reference NN to define the set of perturbations a given NN should be invariant to, thus generalizing the reliance on a reference human to any reference NN. This makes measuring robustness equivalent to measuring the extent to which two NNs share invariances, for which we propose a measure called STIR. STIR re-purposes existing representation similarity measures to make them suitable for measuring shared invariances. Using our measure, we are able to gain insights into how shared invariances vary with changes in weight initialization, architecture, loss function, and training dataset. Our implementation is available at: \url{https://github.com/nvedant07/stir}.
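A hedged numpy sketch of the second half of this idea: given pairs of inputs that the reference NN maps to (nearly) identical representations, score how similarly the other NN represents them, using linear CKA as a stand-in representation similarity measure. The construction of the invariant pairs themselves is not reproduced here.

```python
# Sketch of scoring shared invariances: given pairs (x, x') that a reference
# model m1 maps to nearly the same representation, measure how similarly a
# second model m2 represents them, using linear CKA as the similarity measure.
# How the invariant pairs are generated is not reproduced; this is illustrative.
import numpy as np


def linear_cka(X, Y):
    """Linear CKA between two (n_examples, dim) representation matrices."""
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den


def shared_invariance_score(m2_reps_on_x, m2_reps_on_x_prime):
    """High when m2 is also invariant to perturbations m1 ignores."""
    return linear_cka(m2_reps_on_x, m2_reps_on_x_prime)


# Usage: reps = m2(x_batch), reps_prime = m2(x_prime_batch), where m1(x) ~ m1(x').
```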